[Flattened graph-viewer dump of an ONNX model graph; the original rendering did not survive extraction. Recoverable structure:]

Model: timm ConvNeXt with GlobalResponseNorm blocks (ConvNeXt-V2 style), dynamically quantized to uint8 and exported to ONNX.

- Input: float32[1, 3, 300, 300]
- Stem: 4×4 stride-4 Conv2d with 96 output channels, quantized as ConvInteger (uint8 weights, w_zero_point = 117), followed by LayerNorm2d realized as Transpose → LayerNormalization → Transpose
- Stage resolutions/widths: 96 @ 75×75 → 192 @ 37×37 → 384 @ 18×18 → 768 @ 9×9
- Each ConvNeXt block: 7×7 depthwise ConvInteger → LayerNorm → fc1 as MatMulInteger (uint8[96, 384], b_zero_point = 128) → GELU unrolled into primitive ops (Div by 1.41421353…, Erf, Add 1, Mul, Mul 0.5) → GlobalResponseNorm (ReduceL2, ReduceMean, Add ε ≈ 1e-6, Div, addcmul with 1×1×1×384 parameters) → fc2 as MatMulInteger (uint8[384, 96]) → residual Add
- Activations are quantized on the fly: either DynamicQuantizeLinear, or an inlined per-row variant (ReduceMin/ReduceMax → symmetrize via Neg/Max → Div by 127 → Reciprocal/Mul/Round/clamp → Cast). Integer outputs are dequantized via Cast → Mul by the combined input/weight scale (the *_quant_scales_mul and *_quant_output_scale_mul nodes) → bias Add.
127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Add_63float32[1,75,75,1]B〈1×75×75×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1__to_copy_13_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_
fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_MatMul_85_quantuint8[384,96]B〈384×96〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_mm_3_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Mul_89float32[96]B〈96〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Add_93float32[96]B〈96〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_Add_5TransposeTranspose_token_0DynamicQuantizeLinear_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___0___blocks_1_getattr_l__self___stages___0___blocks_1_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 
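The quantized-linear pattern that the listing repeats for every mlp_fc1/mlp_fc2 can be sketched in numpy. This is a minimal illustration of the ONNX DynamicQuantizeLinear → MatMulInteger → Cast → Mul sequence, not the exported subgraph itself; the helper names are my own.

```python
import numpy as np

def dynamic_quantize_linear(x):
    """ONNX DynamicQuantizeLinear: per-tensor asymmetric uint8 quantization.
    The range is widened to include 0 so the zero point is exactly representable."""
    lo = min(float(x.min()), 0.0)
    hi = max(float(x.max()), 0.0)
    scale = (hi - lo) / 255.0
    if scale == 0.0:
        scale = 1.0
    zp = int(np.clip(round(-lo / scale), 0, 255))
    q = np.clip(np.round(x / scale) + zp, 0, 255).astype(np.uint8)
    return q, np.float32(scale), zp

def quantized_matmul(x, w_q, w_scale, w_zp=128):
    """The pattern the listing repeats: DynamicQuantizeLinear ->
    MatMulInteger (zero points subtracted, int32 accumulation) ->
    Cast(int32 -> float32) -> Mul(x_scale * w_scale)."""
    x_q, x_scale, x_zp = dynamic_quantize_linear(x)
    acc = (x_q.astype(np.int32) - x_zp) @ (w_q.astype(np.int32) - w_zp)
    return acc.astype(np.float32) * (x_scale * w_scale)
```

The bias Add and the surrounding Reshapes from the listing are omitted; they are plain float32 ops applied after dequantization.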
Stage 0, block 2: depthwise ConvInteger with uint8 96×1×7×7 weights (w_zero_point = 123, combined scale ≈ 0.00309), Cast/Mul dequantization and bias Add, Transpose to NHWC, LayerNormalization (96-wide Scale/B), then the same quantized MLP pattern as block 1 (fc1 96→384, erf GELU, GRN with 1×1×1×384 affine, fc2 384→96) and the residual Add. The stage ends with the stage-1 downsample's LayerNorm2d (96-wide Scale/B), a Transpose, and a DynamicQuantizeLinear feeding the downsample conv.
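Each block's folded GELU subgraph (Div by 1.41421353…, Erf, Add 1, Mul, Mul 0.5) is the exact, non-tanh GELU. A scalar sketch:

```python
import math

def gelu_exact(x: float) -> float:
    # Matches the folded subgraph per block: Div(x, sqrt(2)) -> Erf
    # -> Add(1) -> Mul(x) -> Mul(0.5), i.e. x * Phi(x).
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```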
Stage 1 downsample: the normalized input is dynamically quantized and run through ConvInteger with uint8 192×96×2×2 weights (w_zero_point = 139, combined scale ≈ 0.00431), then dequantized (Cast/Mul), bias-added, and re-quantized for the first block.

Stage 1, block 0: depthwise ConvInteger (uint8 192×1×7×7, w_zero_point = 129, scale ≈ 0.00368), bias Add, Transpose to NHWC, LayerNormalization (192-wide Scale/B), quantized MLP fc1 192→768 (uint8 weight, b_zero_point = 128; per-row scales now shaped 1×37×37×1), erf GELU, GRN with 1×1×1×768 affine, quantized fc2 768→192, Transpose back, residual Add, and re-quantization for block 1.

Stage 1, block 1 (listing truncated here): depthwise ConvInteger (uint8 192×1×7×7, w_zero_point = 132, scale ≈ 0.00329), LayerNormalization (192-wide), and the start of the same mlp_fc1 dynamic-quantization subgraph.
lp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Add_63float32[1,37,37,1]B〈1×37×37×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___bl
ocks___1___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1__to_copy_33_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_MatMul_85_quantuint8[192,768]B〈192×768〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_mm_8_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Mul_89float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Add_93float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_4_n2float32B = 1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_4_n3Add_inlfunc__aten_gelu_approximate_none|folded_4_n6float32B = 
1Mul_inlfunc__aten_gelu_approximate_none|folded_4_n7Mul_inlfunc__aten_gelu_approximate_none|folded_4_n10float32A = 0.5Abs_inlfunc__aten_linalg_vector_norm_onnx|folded_4_n4ReduceL2_inlfunc__aten_linalg_vector_norm_onnx|folded_4_n8_n1_n3_n3_n3_n0int64[2]axes〈2〉ReduceMean_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_ReduceMean_9int64[1]axes〈1〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_Add_11float32B = 9.99999997…Div_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_aten_div_12_n0Mul_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_Mul_19Mul_inlfunc_aten_addcmul|folded_4_n3float32[1,1,1,768]A〈1×1×1×768〉Add_inlfunc_aten_addcmul|folded_4_n4float32[1,1,1,768]A〈1×1×1×768〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_Add_21Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__sel
f___stages___1___blocks___1___mlp_fc2_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Add_63float32[1,37,37,1]B〈1×37×37×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1__to_copy_37_QuantizeLinearMul_inlfunc_torch_nn_modules_linea
r_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_MatMul_85_quantuint8[768,192]B〈768×192〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_mm_9_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Mul_89float32[192]B〈192〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Add_93float32[192]B〈192〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_Add_5DynamicQuantizeLinear_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___blocks_1_getattr_l__self___stages___1___blocks_1_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blo
cks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 0.00329827…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___2___conv_dw_1_0_Conv_0_quantuint8[192,1,7,7]w〈192×1×7×7〉uint8w_zero_point = 121Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_getattr_getattr_l__self___stages___1___blocks___2___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___2___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_getattr_getattr_l__self___stages___1___blocks___2___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___1___blocks___2___norm_1_2_LayerNormalization_0float32[192]Scale〈192〉float32[192]B〈192〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_li
near_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Add_63float32[1,37,37,1]B〈1×37×37×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_n
n_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1__to_copy_41_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_MatMul_85_quantuint8[192,768]B〈192×768〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_mm_10_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Mul_89float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Add_93float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_5_n2float32B = 
1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_5_n3Add_inlfunc__aten_gelu_approximate_none|folded_5_n6float32B = 1Mul_inlfunc__aten_gelu_approximate_none|folded_5_n7Mul_inlfunc__aten_gelu_approximate_none|folded_5_n10float32A = 0.5Abs_inlfunc__aten_linalg_vector_norm_onnx|folded_5_n4ReduceL2_inlfunc__aten_linalg_vector_norm_onnx|folded_5_n8_n1_n3_n3_n3_n0int64[2]axes〈2〉ReduceMean_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_ReduceMean_9int64[1]axes〈1〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_Add_11float32B = 9.99999997…Div_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_aten_div_12_n0Mul_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_Mul_19Mul_inlfunc_aten_addcmul|folded_5_n3float32[1,1,1,768]A〈1×1×1×768〉Add_inlfunc_aten_addcmul|folded_5_n4float32[1,1,1,768]A〈1×1×1×768〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_Add_21Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__se
lf___stages___1___blocks___2___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Add_63float32[1,37,37,1]B〈1×37×37×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linea
r_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1__to_copy_45_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_MatMul_85_quantuint8[768,192]B〈768×192〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_mm_11_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Mul_89float32[192]B〈192〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Add_93float32[192]B〈192〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_Add_5Transpose_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___2___downsample_0_1_Transpose_0LayerNormalizati
on_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___2___downsample_0_1_LayerNormalization_1float32[192]Scale〈192〉float32[192]B〈192〉Transpose_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___2___downsample_0_1_Transpose_2DynamicQuantizeLinear_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_getattr_l__self___stages___2___downsample_0_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___2___downsample_1_1_1_Conv_0_quant_scales_mulfloat32B = 0.00490160…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___2___downsample_1_1_1_Conv_0_quantuint8[384,192,2,2]w〈384×192×2×2〉uint8w_zero_point = 133Cast_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_getattr_l__self___stages___2___downsample_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___2___downsample_1_1_1_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_getattr_l__self___stages___2___downsample_1_bias_addDynamicQuantizeLinear_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_getattr_l__self___stages___2___downsample_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___0___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 
0.00399155…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___0___conv_dw_1_0_Conv_0_quantuint8[384,1,7,7]w〈384×1×7×7〉uint8w_zero_point = 134Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_getattr_getattr_l__self___stages___2___blocks___0___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___0___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_getattr_getattr_l__self___stages___2___blocks___0___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___0___norm_1_2_LayerNormalization_0float32[384]Scale〈384〉float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_geta
ttr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Add_63float32[1,18,18,1]B〈1×18×18×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_mod
ules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1__to_copy_49_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_MatMul_85_quantuint8[384,1536]B〈384×1536〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_mm_12_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Mul_89float32[1536]B〈1536〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Add_93float32[1536]B〈1536〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_6_n2float32B = 1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_6_n3Add_inlfunc__aten_gelu_approximate_none|folded_6_n6float32B = 
[Netron-style textual dump of an ONNX graph segment: ConvNeXt stage 2, blocks 0-3, with dynamically quantized convolutions and linear layers. The recoverable structure, repeated once per ConvNeXtBlock, is:

- Depthwise 7×7 conv (conv_dw): DynamicQuantizeLinear on the block input, then ConvInteger with uint8 weights of shape 384×1×7×7 (w_zero_point = 127 / 85 / 186 and combined output scale B = 0.00353502… / 0.00519265… / 0.00584665… for blocks 1 / 2 / 3 respectively), followed by Cast, Mul by the output scale, and a bias Add.
- Transpose to channels-last, then LayerNormalization with Scale and B of shape [384].
- mlp.fc1: an inlined activation-quantization subgraph (Reshape, ReduceMin/ReduceMax, symmetric scale via Neg/Max/Div by 127, Mul/Round/Add with a per-patch offset of shape [1,18,18,1], clamping via Max/Min, Cast, Reshape) feeding MatMulInteger with uint8 weights of shape 384×1536 and b_zero_point = 128; the int32 output is rescaled (Cast, Mul by scales) and biased (Add, float32 [1536]).
- GELU in its exact (Erf) form: Div by 1.41421353…, Erf, Add 1, Mul, Mul by 0.5.
- GlobalResponseNorm (mlp.grn): ReduceL2 over the spatial axes, ReduceMean, Add eps (B = 9.99999997…), Div, Mul, then an addcmul against learned parameters of shape [1,1,1,1536].
- mlp.fc2: the same quantized-linear pattern with uint8 weights of shape 1536×384, b_zero_point = 128, and a float32 [384] bias.
- Transpose back to channels-first and a residual Add, then DynamicQuantizeLinear of the block output for the next block's conv_dw.]
0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Add_63float32[1,18,18,1]B〈1×18×18×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_C
ast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1__to_copy_77_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_MatMul_85_quantuint8[1536,384]B〈1536×384〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_mm_19_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Mul_89float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Add_93float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___3___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_3_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_3_1_Add_5DynamicQuantizeLinear_inlfunc_torch_nn_modules_contai
ner_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_3_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___4___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 0.00614256…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___4___conv_dw_1_0_Conv_0_quantuint8[384,1,7,7]w〈384×1×7×7〉uint8w_zero_point = 105Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_getattr_getattr_l__self___stages___2___blocks___4___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___4___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_getattr_getattr_l__self___stages___2___blocks___4___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___4___norm_1_2_LayerNormalization_0float32[384]Scale〈384〉float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___
4___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Add_63float32[1,18,18,1]B〈1×18×18×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages
___2___blocks___4___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1__to_copy_81_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_MatMul_85_quantuint8[384,1536]B〈384×1536〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_mm_20_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Mul_89float32[1536]B〈1536〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Add_93float32[1536]B〈1536〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc
1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_10_n2float32B = 1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_10_n3Add_inlfunc__aten_gelu_approximate_none|folded_10_n6float32B = 1Mul_inlfunc__aten_gelu_approximate_none|folded_10_n7Mul_inlfunc__aten_gelu_approximate_none|folded_10_n10float32A = 0.5Abs_inlfunc__aten_linalg_vector_norm_onnx|folded_10_n4ReduceL2_inlfunc__aten_linalg_vector_norm_onnx|folded_10_n8_n1_n3_n3_n3_n0int64[2]axes〈2〉ReduceMean_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___4___mlp_grn_1_ReduceMean_9int64[1]axes〈1〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___4___mlp_grn_1_Add_11float32B = 9.99999997…Div_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___4___mlp_grn_1_aten_div_12_n0Mul_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___4___mlp_grn_1_Mul_19Mul_inlfunc_aten_addcmul|folded_10_n3float32[1,1,1,1536]A〈1×1×1×1536〉Add_inlfunc_aten_addcmul|folded_10_n4float32[1,1,1,1536]A〈1×1×1×1536〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___4___mlp_grn_1_Add_21Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___
2___blocks___4___mlp_fc2_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Add_63float32[1,18,18,1]B〈1×18×18×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self
___stages___2___blocks___4___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1__to_copy_85_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_MatMul_85_quantuint8[1536,384]B〈1536×384〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_mm_21_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Mul_89float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Add_93float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___4___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_4_1_Add_5DynamicQuanti
zeLinear_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_4_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___5___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 0.00459072…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___5___conv_dw_1_0_Conv_0_quantuint8[384,1,7,7]w〈384×1×7×7〉uint8w_zero_point = 157Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_getattr_getattr_l__self___stages___2___blocks___5___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___5___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_getattr_getattr_l__self___stages___2___blocks___5___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___5___norm_1_2_LayerNormalization_0float32[384]Scale〈384〉float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_
getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Add_63float32[1,18,18,1]B〈1×18×18×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear
_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1__to_copy_89_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_MatMul_85_quantuint8[384,1536]B〈384×1536〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_mm_22_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Mul_89float32[1536]B〈1536〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Add_93float32[1536]B〈1536〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L_
_self___stages___2___blocks___5___mlp_fc1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_11_n2float32B = 1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_11_n3Add_inlfunc__aten_gelu_approximate_none|folded_11_n6float32B = 1Mul_inlfunc__aten_gelu_approximate_none|folded_11_n7Mul_inlfunc__aten_gelu_approximate_none|folded_11_n10float32A = 0.5Abs_inlfunc__aten_linalg_vector_norm_onnx|folded_11_n4ReduceL2_inlfunc__aten_linalg_vector_norm_onnx|folded_11_n8_n1_n3_n3_n3_n0int64[2]axes〈2〉ReduceMean_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_ReduceMean_9int64[1]axes〈1〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_Add_11float32B = 9.99999997…Div_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_aten_div_12_n0Mul_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_Mul_19Mul_inlfunc_aten_addcmul|folded_11_n3float32[1,1,1,1536]A〈1×1×1×1536〉Add_inlfunc_aten_addcmul|folded_11_n4float32[1,1,1,1536]A〈1×1×1×1536〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_Add_21Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Li
near_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Add_63float32[1,18,18,1]B〈1×18×18×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modul
es_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1__to_copy_93_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_MatMul_85_quantuint8[1536,384]B〈1536×384〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_mm_23_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Mul_89float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Add_93float32[384]B〈384〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___st
[Extracted ONNX graph dump (truncated mid-name at both ends): ConvNeXt stage 2, blocks 6–8 of a dynamically quantized PyTorch/timm export. The blocks use GlobalResponseNorm, i.e. ConvNeXt-V2-style blocks. Each block repeats the same node pattern; only the per-block constants differ.]

Per-block structure recovered from the node names:

- Depthwise conv (conv_dw), dynamically quantized: DynamicQuantizeLinear -> ConvInteger with uint8 weights of shape 384×1×7×7 (w_zero_point = 99 / 128 / 134 and weight scales ≈ 0.00423 / 0.00367 / 0.00341 for blocks 6 / 7 / 8) -> Cast -> Mul by the combined input-and-weight scale -> bias Add.
- Transpose to channels-last, then LayerNormalization (float32 Scale and B, each of shape [384]).
- mlp.fc1 (384 -> 1536): a hand-rolled blockwise dynamic quantization of the activation (Reshape into tiles, ReduceMin/ReduceMax, absolute-max reduction via Neg/Max, Div by 127, Reciprocal, Mul, Round, per-tile offsets of shape [1,18,18,1], clamping via Max/Min, Cast to uint8), followed by MatMulInteger with uint8 weights [384,1536] (b_zero_point = 128), Cast, Mul by the product of activation and weight scales, and bias Add (float32[1536]).
- GELU in its exact erf form, 0.5·x·(1 + erf(x/√2)): Div by ≈1.41421, Erf, Add 1, Mul, Mul by 0.5.
- GlobalResponseNorm: ReduceL2 over the spatial axes, ReduceMean, Add of eps ≈ 1e-6, Div, Mul, then an addcmul with learned scale and bias of shape [1,1,1,1536].
- mlp.fc2 (1536 -> 384): the same blockwise dynamic quantization and MatMulInteger pattern (uint8 weights [1536,384], b_zero_point = 128), bias Add (float32[384]).
- Transpose back to channels-first and residual Add with the block input, followed by DynamicQuantizeLinear feeding the next block.
ock_getattr_L__self___stages___2___blocks_8_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_Add_5Transpose_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___3___downsample_0_1_Transpose_0LayerNormalization_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___3___downsample_0_1_LayerNormalization_1float32[384]Scale〈384〉float32[384]B〈384〉Transpose_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___3___downsample_0_1_Transpose_2DynamicQuantizeLinear_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_getattr_l__self___stages___3___downsample_0_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___3___downsample_1_1_1_Conv_0_quant_scales_mulfloat32B = 0.00383704…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___3___downsample_1_1_1_Conv_0_quantuint8[768,384,2,2]w〈768×384×2×2〉uint8w_zero_point = 
130Cast_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_getattr_l__self___stages___3___downsample_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___3___downsample_1_1_1_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_getattr_l__self___stages___3___downsample_1_bias_addDynamicQuantizeLinear_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_getattr_l__self___stages___3___downsample_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___0___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 0.00492629…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___0___conv_dw_1_0_Conv_0_quantuint8[768,1,7,7]w〈768×1×7×7〉uint8w_zero_point = 
117Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_getattr_getattr_l__self___stages___3___blocks___0___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___0___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_getattr_getattr_l__self___stages___3___blocks___0___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___3___blocks___0___norm_1_2_LayerNormalization_0float32[768]Scale〈768〉float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_div_41_n0float32B = 
127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Add_63float32[1,9,9,1]B〈1×9×9×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1__to_copy_121_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1
_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_MatMul_85_quantuint8[768,3072]B〈768×3072〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_mm_30_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Mul_89float32[3072]B〈3072〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Add_93float32[3072]B〈3072〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_15_n2float32B = 1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_15_n3Add_inlfunc__aten_gelu_approximate_none|folded_15_n6float32B = 1Mul_inlfunc__aten_gelu_approximate_none|folded_15_n7Mul_inlfunc__aten_gelu_approximate_none|folded_15_n10float32A = 
0.5Abs_inlfunc__aten_linalg_vector_norm_onnx|folded_15_n4ReduceL2_inlfunc__aten_linalg_vector_norm_onnx|folded_15_n8_n1_n3_n3_n3_n0int64[2]axes〈2〉ReduceMean_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_ReduceMean_9int64[1]axes〈1〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_Add_11float32B = 9.99999997…Div_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_aten_div_12_n0Mul_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_Mul_19Mul_inlfunc_aten_addcmul|folded_15_n3float32[1,1,1,3072]A〈1×1×1×3072〉Add_inlfunc_aten_addcmul|folded_15_n4float32[1,1,1,3072]A〈1×1×1×3072〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_Add_21Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_div_41_n0float32B = 
127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Add_63float32[1,9,9,1]B〈1×9×9×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1__to_copy_125_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2
_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_MatMul_85_quantuint8[3072,768]B〈3072×768〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_mm_31_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Mul_89float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Add_93float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_Add_5DynamicQuantizeLinear_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___blocks_1_getattr_l__self___stages___3___blocks_0_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___st
ages___3___blocks___1___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 0.00551574…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___1___conv_dw_1_0_Conv_0_quantuint8[768,1,7,7]w〈768×1×7×7〉uint8w_zero_point = 160Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_getattr_getattr_l__self___stages___3___blocks___1___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___1___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_getattr_getattr_l__self___stages___3___blocks___1___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___3___blocks___1___norm_1_2_LayerNormalization_0float32[768]Scale〈768〉float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___m
lp_fc1_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Add_63float32[1,9,9,1]B〈1×9×9×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks
___1___mlp_fc1_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1__to_copy_129_QuantizeLinearMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_MatMul_85_quantuint8[768,3072]B〈768×3072〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_mm_32_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Mul_89float32[3072]B〈3072〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Add_93float32[3072]B〈3072〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_99int64[4]shape〈4〉Div_inlfunc__aten_gelu_approximate_none|folded_16_n2float32B = 1.41421353…Erf_inlfunc__aten_gelu_approximate_none|folded_16_n3Add_inlfunc__aten_gelu_approximate_none|folded_16_n6float32B = 
1Mul_inlfunc__aten_gelu_approximate_none|folded_16_n7Mul_inlfunc__aten_gelu_approximate_none|folded_16_n10float32A = 0.5Abs_inlfunc__aten_linalg_vector_norm_onnx|folded_16_n4ReduceL2_inlfunc__aten_linalg_vector_norm_onnx|folded_16_n8_n1_n3_n3_n3_n0int64[2]axes〈2〉ReduceMean_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_ReduceMean_9int64[1]axes〈1〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_Add_11float32B = 9.99999997…Div_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_aten_div_12_n0Mul_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_Mul_19Mul_inlfunc_aten_addcmul|folded_16_n3float32[1,1,1,3072]A〈1×1×1×3072〉Add_inlfunc_aten_addcmul|folded_16_n4float32[1,1,1,3072]A〈1×1×1×3072〉Add_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_Add_21Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_neg_38_n0Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_maximum_39_n0Div_inlfunc_torch_nn_modules_linear_Linear_getattr_geta
ttr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_div_41_n0float32B = 127Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Max_48〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_54int64[4]shape〈4〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_80int64[2]shape〈2〉Reciprocal_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0Expand_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_expand_82_n2int64[2]shape〈2〉Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Mul_61Round_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Round_62Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Add_63float32[1,9,9,1]B〈1×9×9×1〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Max_68〈…〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Min_69〈…〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_72int64[4]shape〈4〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_73Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_77int64[2]shape〈2〉Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_83DynamicQuantizeLinear_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1__to_copy_133_QuantizeLinearMul_inlfunc_torch_nn_module
s_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_MatMul_85_quant_scales_mulfloat32B = 1MatMulInteger_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_MatMul_85_quantuint8[3072,768]B〈3072×768〉uint8b_zero_point = 128Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_mm_33_output_quantized_castMul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mulCast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_86Cast_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_87Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Mul_88Mul_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Mul_89float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_92int64[4]shape〈4〉Add_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Add_93float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_96int64[2]shape〈2〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_99int64[4]shape〈4〉Transpose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_Transpose_4Add_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_Add_5DynamicQuantizeLinear_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___blocks_1_getattr_l__self___stages___3___blocks_1_1_QuantizeLinearMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages
___3___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mulfloat32B = 0.00770069…ConvInteger_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___2___conv_dw_1_0_Conv_0_quantuint8[768,1,7,7]w〈768×1×7×7〉uint8w_zero_point = 175Cast_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_getattr_getattr_l__self___stages___3___blocks___2___conv_dw_1_output_quantized_castMul_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___2___conv_dw_1_0_Conv_0_quant_output_scale_mulAdd_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_getattr_getattr_l__self___stages___3___blocks___2___conv_dw_1_bias_addTranspose_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_Transpose_1LayerNormalization_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___3___blocks___2___norm_1_2_LayerNormalization_0float32[768]Scale〈768〉float32[768]B〈768〉Reshape_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_23int64[4]shape〈4〉ReduceMin_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_amin_25_n0int64[1]axes〈1〉ReduceMax_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_amax_27_n0int64[1]axes〈1〉Min_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_minimum_32_n0〈…〉Max_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_maximum_37_n0〈…〉Neg_inlfunc_torch_nn_
[ONNX graph node listing (rendered graph not recoverable). The exported model is a timm ConvNeXt with dynamically quantized Linear layers; the recoverable structure of this final portion of the graph is:

- stages.3.blocks.2.mlp.fc1 (Linear, 768 -> 3072), dynamically quantized: ReduceMin/ReduceMax over the activation, Div by 127 to form the scale, Round/Min/Max clipping, DynamicQuantizeLinear on the input, MatMulInteger against the uint8[768,3072] weight (b_zero_point = 128), Cast back to float32, Mul by the product of the input and weight scales (weight scale constant = 1), then Add of the float32[3072] bias.
- GELU (approximate="none"), inlined as its exact erf form: Div by 1.41421353... (sqrt 2), Erf, Add 1, Mul, Mul 0.5.
- timm GlobalResponseNorm (stages.3.blocks.2.mlp.grn): ReduceL2 over two spatial axes (via an inlined aten_linalg_vector_norm), ReduceMean, Add eps (9.99999997e-07, i.e. 1e-6), Div, Mul, and an inlined aten_addcmul with learnable float32[1,1,1,3072] parameters.
- stages.3.blocks.2.mlp.fc2 (Linear, 3072 -> 768): the same dynamic-quantization pattern with a uint8[3072,768] weight (b_zero_point = 128) and a float32[768] bias.
- ConvNeXtBlock residual connection (stages.3.blocks.2): Transpose, Add.
- head: AdaptiveAvgPool2d as ReduceMean over two spatial axes, LayerNorm2d as Transpose / LayerNormalization (float32[768] scale and bias) / Transpose, then Flatten (Reshape to rank 2).
- head.fc (Linear, 768 -> 11160), dynamically quantized: weight scale constant = 0.78823530..., uint8[768,11160] weight (b_zero_point = 94), float32[11160] bias.
- Graph output head_1: float32[1,11160].]
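The dynamically quantized Linear pattern that recurs above (DynamicQuantizeLinear on the activation, MatMulInteger against a pre-quantized uint8 weight, then dequantization by the product of the two scales) can be sketched in plain NumPy. This is an illustrative reimplementation, not the exported graph itself; the helper names are my own:

```python
import numpy as np

def dynamic_quantize(x):
    # Asymmetric per-tensor uint8 quantization, as DynamicQuantizeLinear does:
    # the range is widened to include 0.0 so zero is exactly representable.
    lo, hi = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (hi - lo) / 255.0
    if scale == 0.0:
        scale = 1.0
    zero_point = np.uint8(np.clip(np.round(-lo / scale), 0, 255))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, np.float32(scale), zero_point

def quantized_linear(x, w_q, w_scale, w_zero_point, bias):
    # MatMulInteger accumulates products of zero-point-shifted operands in
    # int32; multiplying by (x_scale * w_scale) dequantizes back to float32,
    # and the float bias is added afterwards, matching the Mul/Add chain.
    x_q, x_scale, x_zp = dynamic_quantize(x)
    acc = (x_q.astype(np.int32) - np.int32(x_zp)) @ \
          (w_q.astype(np.int32) - np.int32(w_zero_point))
    return acc.astype(np.float32) * (x_scale * w_scale) + bias
```

Because the activation range is recomputed per call, no calibration data is needed; the cost is the per-batch min/max reduction visible as the ReduceMin/ReduceMax nodes in the graph.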
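The five-op GELU subgraph above (Div by sqrt 2, Erf, Add 1, Mul, Mul 0.5) is the exact erf formulation of GELU, which PyTorch emits when `approximate="none"`. A minimal scalar sketch of what that chain computes:

```python
import math

def gelu_exact(x: float) -> float:
    # gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    # i.e. x scaled by the standard normal CDF evaluated at x.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

The constant 1.41421353... in the graph is sqrt(2) stored at float32 precision.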